213 research outputs found

    See-and-avoid quadcopter using fuzzy control optimized by cross-entropy

In this work we present an optimized fuzzy visual servoing system for obstacle avoidance using an unmanned aerial vehicle. Cross-entropy theory is used to optimize the gains of our controllers. The optimization process was carried out using the ROS-Gazebo 3D simulation with purpose-built extensions developed for our experiments. Visual servoing is achieved through an image processing front-end that uses the Camshift algorithm to detect and track objects in the scene. Experimental flight trials using a small quadrotor were performed to validate the parameters estimated in simulation. The integration of cross-entropy methods is a straightforward way to estimate optimal gains, achieving excellent results when tested in real flights.
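The cross-entropy optimization loop described in this abstract can be sketched in a few lines: sample candidate controller gains from a Gaussian, keep the lowest-cost elite fraction, and refit the distribution to the elites. The following is a minimal illustration with a toy quadratic cost; the population size, iteration count and the assumed optimum (2.0, 0.5) are placeholders for the example, not values from the paper.

```python
import numpy as np

def cross_entropy_optimize(cost, dim, iters=50, pop=100, elite_frac=0.2, seed=0):
    """Cross-entropy method: sample gains from a Gaussian, keep the elite
    fraction with the lowest cost, and refit the Gaussian to the elites."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 2.0
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        costs = np.apply_along_axis(cost, 1, samples)
        elites = samples[np.argsort(costs)[:n_elite]]
        # Refit the sampling distribution; the small floor keeps std > 0.
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy cost: distance of candidate gains from a hypothetical optimum (2.0, 0.5).
best = cross_entropy_optimize(lambda g: np.sum((g - np.array([2.0, 0.5]))**2), dim=2)
```

In the paper, the cost would instead be a closed-loop performance measure evaluated in the ROS-Gazebo simulation.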

    Discernment of bee pollen loads using computer vision and one-class classification techniques

In this paper, we propose a system for authenticating local bee pollen against fraudulent samples using image processing and classification techniques. Our system is based on the colour properties of bee pollen loads and the use of one-class classifiers to reject unknown pollen samples. The latter classification techniques allow us to tackle the major difficulty of the problem: the existence of many possible fraudulent pollen types. Also presented is a multi-classifier model with an ambiguity discovery process to fuse the outputs of the one-class classifiers. The method is validated by authenticating Spanish bee pollen types, the overall accuracy of the final system being 94%. Therefore, the system is able to rapidly reject non-local pollen samples with inexpensive hardware and without the need to send the product to a laboratory.
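The core idea of one-class classification here — model only the known local pollen class and reject anything too far from it — can be sketched with a Gaussian model of colour features and a Mahalanobis-distance threshold. The class name, colour values and threshold below are illustrative assumptions; the paper's actual one-class classifiers and multi-classifier fusion scheme are more elaborate.

```python
import numpy as np

class OneClassColour:
    """Minimal one-class classifier: fit a Gaussian to the colour features of
    the known (local) pollen class and reject samples whose Mahalanobis
    distance from it exceeds a threshold."""
    def __init__(self, threshold=3.5):
        self.threshold = threshold
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False)
                                     + 1e-6 * np.eye(X.shape[1]))
        return self
    def predict(self, X):
        d = X - self.mean
        dist = np.sqrt(np.einsum('ij,jk,ik->i', d, self.cov_inv, d))
        return dist <= self.threshold  # True = accepted as local pollen

rng = np.random.default_rng(0)
local = rng.normal([120, 80, 40], 5, size=(200, 3))  # hypothetical RGB features
clf = OneClassColour().fit(local)
fraud = np.array([[30.0, 200.0, 150.0]])             # clearly different colour
```

Because only the local class is modelled, previously unseen fraudulent pollen types are rejected without ever having been in the training data.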

    Efficient Visual Odometry and Mapping for Unmanned Aerial Vehicle Using ARM-based Stereo Vision Pre-Processing System

Visual odometry and mapping methods can provide accurate navigation and comprehensive environment (obstacle) information for autonomous flights of Unmanned Aerial Vehicles (UAVs) in GPS-denied cluttered environments. This work presents a new lightweight, small-scale, low-cost ARM-based stereo vision pre-processing system, which not only serves as an onboard sensor to continuously estimate the 6-DOF UAV pose, but also as an onboard assistant computer to pre-process visual information, thereby saving computational capacity on the onboard host computer of the UAV for other tasks. The visual odometry is performed by a plugin specifically developed for this new system with a fixed baseline (12 cm). In addition, the pre-processed information from this new system is sent via a Gigabit Ethernet cable to the onboard host computer of the UAV for real-time environment reconstruction and obstacle detection with an octree-based 3D occupancy grid mapping approach, i.e. OctoMap. The visual algorithm is evaluated with the stereo video datasets from EuRoC Challenge III in terms of efficiency, accuracy and robustness. Finally, the new system is mounted and tested on a real quadrotor UAV to carry out the visual odometry and mapping task.
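The occupancy mapping step mentioned above (OctoMap) boils down to discretising 3D points into voxels and accumulating a log-odds occupancy value per voxel. A minimal sketch, with a flat hash map standing in for the real octree; the resolution and log-odds constants are illustrative, not OctoMap's defaults.

```python
import numpy as np

class VoxelMap:
    """Minimal occupancy map in the spirit of OctoMap: 3D points from the
    stereo pre-processing stage are discretised into voxels, and each hit
    increases that voxel's log-odds occupancy value."""
    def __init__(self, resolution=0.1, hit=0.85, occupied=0.0):
        self.res, self.hit, self.occ_thresh = resolution, hit, occupied
        self.log_odds = {}          # voxel index -> accumulated log-odds
    def insert(self, points):
        for p in points:
            key = tuple(np.floor(np.asarray(p) / self.res).astype(int))
            self.log_odds[key] = self.log_odds.get(key, 0.0) + self.hit
    def is_occupied(self, point):
        key = tuple(np.floor(np.asarray(point) / self.res).astype(int))
        # Unknown voxels default below the threshold, i.e. free/unknown.
        return self.log_odds.get(key, -1.0) > self.occ_thresh

m = VoxelMap(resolution=0.1)
m.insert([[1.23, 0.41, 0.87]] * 3)  # three stereo hits on the same obstacle
```

The real octree additionally coarsens resolution with depth and supports ray-casting updates that also mark free space along each stereo ray.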

    A pan-tilt camera Fuzzy vision controller on an unmanned aerial vehicle

This paper presents an implementation of two Fuzzy Logic controllers working in parallel for a pan-tilt camera platform on a UAV. The implementation uses a basic Lucas-Kanade tracker, which sends the error between the centre of the tracked object and the centre of the image to the Fuzzy controller. This information is enough for the controller to follow the object by moving a two-axis servo platform, despite the UAV's vibrations and movements. The two Fuzzy controllers, one per axis, each work with a rule base of 49 rules, two inputs and one output, with a more significant sector defined to improve their behaviour.
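One axis of such a controller can be sketched as follows: the tracker's pixel error and its derivative are fuzzified with triangular membership functions, a rule table is evaluated, and the servo command is obtained by weighted-centroid defuzzification. For brevity this sketch uses a 3x3 rule base (the paper's controllers use 7x7 = 49 rules), and the membership ranges and output values are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_axis_controller(error, d_error):
    """One axis of a pan-tilt fuzzy controller: fuzzify the normalised error
    and its derivative, evaluate the rule table, defuzzify by centroid."""
    sets = {'N': (-1.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 1.0)}
    outputs = {'N': -1.0, 'Z': 0.0, 'P': 1.0}
    # Rule table: (error label, derivative label) -> output label.
    rules = {('N', 'N'): 'N', ('N', 'Z'): 'N', ('N', 'P'): 'Z',
             ('Z', 'N'): 'N', ('Z', 'Z'): 'Z', ('Z', 'P'): 'P',
             ('P', 'N'): 'Z', ('P', 'Z'): 'P', ('P', 'P'): 'P'}
    num = den = 0.0
    for (e_lbl, d_lbl), out_lbl in rules.items():
        w = min(tri(error, *sets[e_lbl]), tri(d_error, *sets[d_lbl]))
        num += w * outputs[out_lbl]
        den += w
    return num / den if den else 0.0
```

A centred target yields a zero command, while a target right of centre drives the servo towards it; the second controller for the other axis is identical in structure.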

    A Review of Deep Learning Methods and Applications for Unmanned Aerial Vehicles

Deep learning has recently shown outstanding results in solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capabilities for learning representations from the complex data acquired in real environments make it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are currently being extensively applied to several types of civilian tasks, ranging from security, surveillance, and disaster rescue to parcel delivery and warehouse management. In this paper, a thorough review is performed of recently reported uses and applications of deep learning for UAVs, including the most relevant developments as well as their performances and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning to UAV-based solutions.

    Survey of Bayesian Networks Applications to Intelligent Autonomous Vehicles

This article reviews the applications of Bayesian Networks to Intelligent Autonomous Vehicles (IAV) from the decision-making point of view, which represents the final step towards fully Autonomous Vehicles (currently under discussion). Until now, when it comes to making high-level decisions for Autonomous Vehicles (AVs), humans have the last word. Based on the works cited in this article and the analysis done here, the modules of a general decision-making framework and its variables are inferred. Many efforts have been made in the lab showing Bayesian Networks to be a promising computational model for decision making. Further research should go in the direction of testing Bayesian Network models in real situations. In addition to the applications, Bayesian Network fundamentals are introduced as elements to consider when developing IAVs with the potential of making high-level judgement calls. Comment: 34 pages, 2 figures, 3 tables
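The kind of Bayesian-network decision making surveyed above can be illustrated with the smallest possible network, a two-node model Obstacle -> Alarm: given a sensor alarm, Bayes' rule yields the obstacle posterior, and a high-level decision thresholds it. All probabilities and names here are illustrative placeholders, not values or models from any cited work.

```python
def posterior_obstacle(alarm, p_obstacle=0.1, p_alarm_obs=0.9, p_alarm_clear=0.05):
    """Two-node Bayesian network (Obstacle -> Alarm): compute
    P(Obstacle | Alarm = alarm) by Bayes' rule from the prior and the
    alarm's conditional probability table."""
    p_a_obs = p_alarm_obs if alarm else 1.0 - p_alarm_obs
    p_a_clear = p_alarm_clear if alarm else 1.0 - p_alarm_clear
    joint_obs = p_a_obs * p_obstacle
    joint_clear = p_a_clear * (1.0 - p_obstacle)
    return joint_obs / (joint_obs + joint_clear)

def should_brake(alarm, risk_threshold=0.5):
    """High-level decision: brake when the obstacle posterior is too high."""
    return posterior_obstacle(alarm) > risk_threshold
```

Real IAV networks chain many such nodes (sensors, road context, other agents), but inference and thresholded decisions follow the same pattern.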

    SIGS: Synthetic Imagery Generating Software for the development and evaluation of vision-based sense-and-avoid systems

Unmanned Aerial Systems (UASs) have recently become a versatile platform for many civilian applications including inspection, surveillance and mapping. Sense-and-Avoid systems are essential for the autonomous safe operation of these systems in non-segregated airspaces. Vision-based Sense-and-Avoid systems are preferred to other alternatives as their price, physical dimensions and weight are more suitable for small and medium-sized UASs, but obtaining real flight imagery of potential collision scenarios is hard and dangerous, which complicates the development of vision-based detection and tracking algorithms. For this purpose, user-friendly software for synthetic imagery generation has been developed, allowing users to blend user-defined flight imagery of a simulated aircraft with real flight scenario images to produce realistic images with ground truth annotations. These are extremely useful for the development and benchmarking of vision-based detection and tracking algorithms at a much lower cost and risk. An image processing algorithm has also been developed for the automatic detection of the occlusions caused by certain parts of the UAV which carries the camera. The detected occlusions can later be used by our software to simulate the occlusions due to the UAV that would appear in a real flight with the same camera setup. Additionally, this algorithm can be used to mask out pixels which do not contain relevant information about the scene, making the visual detection search process more efficient. Finally, an application example of the imagery obtained with our software for the benchmarking of a state-of-the-art visual tracker is presented.
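The blending step at the heart of such a pipeline is alpha compositing: a rendered aircraft layer is laid over a real flight image according to a per-pixel mask, which can also encode the detected UAV occlusions. A minimal sketch assuming float images in [0, 1]; this is an illustration of the technique, not SIGS code.

```python
import numpy as np

def composite(background, aircraft, mask):
    """Alpha-composite a rendered aircraft layer onto a real flight image.
    mask is 1 where the aircraft is visible, 0 where the background (or an
    occluding UAV part) shows through; intermediate values blend the two."""
    if mask.ndim == background.ndim - 1:
        mask = mask[..., None]          # broadcast a 2D mask over channels
    return mask * aircraft + (1.0 - mask) * background
```

Since the aircraft's position in the frame is chosen by the user, the same mask directly yields the ground-truth annotation for each composited image.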

    A Ground-Truth Video Dataset for the Development and Evaluation of Vision-based Sense-and-Avoid systems

The importance of vision-based systems for Sense-and-Avoid is increasing nowadays as remotely piloted and autonomous UAVs become part of the non-segregated airspace. The development and evaluation of these systems demand flight scenario images, which are expensive and risky to obtain. Currently, Augmented Reality techniques allow the compositing of real flight scenario images with 3D aircraft models to produce useful, realistic images for system development and benchmarking at a much lower cost and risk. With the techniques presented in this paper, 3D aircraft models are first positioned in a simulated 3D scene with controlled illumination and rendering parameters. Realistic simulated images are then obtained using an image processing algorithm which fuses the images rendered from the 3D scene with images from real UAV flights, taking into account on-board camera vibrations. Since the intruder and camera poses are user-defined, ground truth data is available. These ground truth annotations allow aircraft detection and tracking algorithms to be developed and quantitatively evaluated. This paper presents the software developed to create a public dataset of 24 videos, together with their annotations and some tracking application results.
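Because the intruder and camera poses are known, ground-truth image annotations follow from a simple pinhole projection of the intruder's position in the camera frame. A sketch of that projection; the intrinsics below are illustrative placeholders, not the dataset's calibration.

```python
import numpy as np

def project_point(p_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point in the camera frame (metres, z
    forward) to pixel coordinates (u, v). With known intruder and camera
    poses, this is how ground-truth annotations can be generated."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])
```

Projecting the corners of the intruder's 3D bounding box this way yields a 2D bounding-box annotation for every composited frame.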